The FinLab Toolkit

HUMAN-CENTERED DESIGN | TESTING

A/B/n Testing

60 Min+

A/B testing, at its most basic, is an experiment that compares two versions of a prototype (product or service) to figure out which performs better. A/B/n testing involves testing at least three variations; the “n” refers to the number of variations being tested. While the term is generally used in the context of web development, the approach is relevant to other testing contexts too.
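
As a rough illustration of how an A/B/n split is often implemented in a digital context, the sketch below (in Python; the variant names and user ID are hypothetical) assigns each user deterministically to one of n variants, so the same user always sees the same version:

    import hashlib

    # Hypothetical variant names for an n = 3 test.
    VARIANTS = ["A", "B", "C"]

    def assign_variant(user_id: str) -> str:
        # Hashing the user ID gives a stable, roughly uniform split:
        # the same user is always assigned the same variant.
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    print(assign_variant("user-42"))  # always the same variant for this user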

USE CASES

  • Test low-fidelity service or product concepts and prototypes.
  • Test high-fidelity, experienceable prototypes that are close to final production.
  • Test variations of specific components, journeys, features, etc. for a product or service already in use, especially a digital one.

LIMITATIONS

It may not be feasible to test every aspect of the design comprehensively. The decision on what to test, how many versions to test, and why rests with the teams.

UNDERSTANDING THE TOOL

  • The ‘Purpose of Test’ section is for noting down the key reason the test is being conducted. This could be drawn from one of the use cases above.
  • The ‘Key Metrics of Success/Failure’ box is for noting down the key metrics against which the test results will be evaluated: for example, time spent, signups, purchases, feedback scores, etc. (A sketch after this list shows how such a metric might be compared across versions.)
  • ‘Testing Formats’ is the section in which the format of the experiment is described: for example, a visual concept test, an interactive screen test, a test with app users, etc. The test may involve studying actual use of a product or service, or recording feedback on concepts (at a level similar to scenario and card-sort exercises).
  • ‘Testing Scope’ is the section in which the designer defines how the versions differ from each other, along with the rationale for why those differences are relevant. This includes specifying the constants and the variables in each version.
  • ‘Tester Profile’ is the section in which the designer identifies who each version will be tested with. Usually, in A/B testing, a base version is tested with a ‘Control’ group, while the other versions are tested with ‘Treatment’ groups.
  • ‘Test Learnings’ is the section in which to note down the results of the testing, in terms of the metrics achieved and the feedback received from testers.
  • Note: Tests are valid if different groups of a single user type test all versions (for example, within existing customers, groups 1, 2, … n test versions A, B, … n). Tests are also valid if one group of one customer type tests all versions (for example, group 1 of existing customers tests versions A, B, … n). Tests are invalid if different versions are tested with different types of users (for example, A with existing customers, B with new customers, C with non-customers), because relevant comparisons cannot then be made.
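
The validity note and the ‘Key Metrics’ box can be made concrete with a small sketch. Below (in Python; all counts are hypothetical), one key metric, signup rate, is compared across three versions, each tested with a separate group drawn from the same user type, as the note above requires:

    # Hypothetical counts: every group comes from a single user type
    # (existing customers), which keeps the comparison valid.
    results = {
        # version: (signups, testers)
        "A": (48, 500),  # control group, base version
        "B": (63, 500),  # treatment group 1
        "C": (55, 500),  # treatment group 2
    }

    for version, (signups, testers) in results.items():
        print(f"Version {version}: {signups / testers:.1%} signup rate")

    # Flag the version that performed best on this metric.
    best = max(results, key=lambda v: results[v][0] / results[v][1])
    print(f"Best performer on this metric: version {best}")

A real evaluation would also check whether the differences between versions are statistically significant before moving ahead.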

STEP BY STEP

  1. Familiarise yourself: Read through the template and discuss the testing requirements. Note down the purpose of the test and key metrics.
  2. Outline versions, format and scope: Note down the format the test will take and the kinds of assets that will need to be developed (if not done already). Make a note of how the versions differ from each other.
  3. Identify tester profile: Note down the kinds of testers and users the testing will focus on. Profiles, segments, groups, etc. can be mentioned here, along with the sample size (a sketch after these steps shows one rough way to estimate it).
  4. Conduct the test: Use the tool as guidance as you go about the test.
  5. Test learnings and review: Once the tests are done, note down the results and learnings in this section. If a version meets the purpose, move ahead; otherwise, discuss iterating the versions and further testing needs.
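
Step 3 asks for a sample size. One rough way to estimate it, sketched below in Python, is the standard two-proportion sample-size formula; the baseline and target rates here are hypothetical, and a real test would substitute the team’s own key metric:

    from statistics import NormalDist

    # Estimate testers needed per group to detect a lift from rate p1 to p2,
    # using the standard two-proportion sample-size formula.
    def sample_size_per_group(p1: float, p2: float,
                              alpha: float = 0.05, power: float = 0.80) -> int:
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
        z_beta = z.inv_cdf(power)           # desired statistical power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return int(((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2) + 1

    # e.g. detecting a lift in signup rate from 10% to 13%:
    print(sample_size_per_group(0.10, 0.13))  # about 1,772 testers per group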

HOW TO FOR FACILITATORS

  1. At the start: Make sure the participants understand the goal of the activity and its direction. Refer to the facilitation questions if they are feeling stuck.
  2. During the exercise: Start with a discussion on testing approaches that each of the teams has used before. Walk through the tool and its components. Discuss why testing is required and which aspects are most crucial to test.
  3. At the close: Have participants walk you through the worksheet they have filled out. Probe them on their final testing plans, versions and approaches.

FACILITATORS’ QUESTION BANK

  • What are some testing approaches you may have used before? Does anyone know what A/B/n testing is?
  • What is the purpose of the testing you want to do? Do your prototypes reflect that?
  • How will you measure testing results? What metrics are key?
  • What is the format of testing? Are you asking respondents to use different versions of your product or service? Or are you presenting concepts and asking them to provide feedback?
  • How do your versions differ from each other? What remains constant? What are some variables?
  • Who are you going to test with? Do you have clarity on the different types of customers? Are you going to test with all or just a specific type?
  • Are you going to take all versions to some users? Who are they, and why do you think that is useful?
  • How much time will you need to get testing done? Is that time practical?
  • How will you record results?